
    Transfer learning for time series classification

    Transfer learning for deep neural networks is the process of first training a base network on a source dataset, and then transferring the learned features (the network's weights) to a second network to be trained on a target dataset. This idea has been shown to improve the generalization capabilities of deep neural networks in many computer vision tasks such as image recognition and object localization. Apart from these applications, deep Convolutional Neural Networks (CNNs) have also recently gained popularity in the Time Series Classification (TSC) community. However, unlike for image recognition problems, transfer learning techniques have not yet been thoroughly investigated for the TSC task. This is surprising, as the accuracy of deep learning models for TSC could potentially be improved if the model is fine-tuned from a pre-trained neural network instead of being trained from scratch. In this paper, we fill this gap by investigating how to transfer deep CNNs for the TSC task. To evaluate the potential of transfer learning, we performed extensive experiments using the UCR archive, the largest publicly available TSC benchmark, containing 85 datasets. For each dataset in the archive, we pre-trained a model and then fine-tuned it on the other datasets, resulting in 7,140 different deep neural networks. These experiments revealed that transfer learning can improve or degrade the model's predictions depending on the dataset used for transfer. Therefore, in an effort to predict the best source dataset for a given target dataset, we propose a new method relying on Dynamic Time Warping to measure inter-dataset similarities. We describe how our method can guide the transfer to choose the best source dataset, leading to an improvement in accuracy on 71 out of 85 datasets.
    Comment: Accepted at IEEE International Conference on Big Data 2018
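    The Dynamic Time Warping similarity at the heart of the source-selection method can be sketched as follows. This is a minimal illustration only: it assumes each dataset is summarized by a single prototype series (the paper works with per-class prototypes), and the function names are hypothetical, not taken from the paper's code.

    ```python
    import numpy as np

    def dtw_distance(a, b):
        """Dynamic Time Warping distance between two 1-D series (O(n*m) dynamic program)."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = (a[i - 1] - b[j - 1]) ** 2
                # Extend the cheapest of the three admissible warping moves.
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        return float(np.sqrt(cost[n, m]))

    def best_source_dataset(target_prototype, source_prototypes):
        """Pick the source dataset whose prototype series is closest (in DTW) to the target's."""
        return min(source_prototypes,
                   key=lambda name: dtw_distance(target_prototype, source_prototypes[name]))
    ```

    A model pre-trained on the dataset returned by `best_source_dataset` would then be fine-tuned on the target dataset.
    
    
    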

    Adversarial Attacks on Deep Neural Networks for Time Series Classification

    Time Series Classification (TSC) problems are encountered in many real-life data mining tasks, ranging from medicine and security to human activity recognition and food safety. With the recent success of deep neural networks in various domains such as computer vision and natural language processing, researchers have started adopting these techniques for solving time series data mining problems. However, to the best of our knowledge, no previous work has considered the vulnerability of deep learning models to adversarial time series examples, which could potentially make them unreliable in situations where the decision taken by the classifier is crucial, such as in medicine and security. For computer vision problems, such attacks have been shown to be very easy to perform: altering the image by an imperceptible amount of noise is enough to trick the network into wrongly classifying the input. Following this line of work, we propose to leverage existing adversarial attack mechanisms to add a special noise to the input time series in order to decrease the network's confidence when classifying instances at test time. Our results reveal that current state-of-the-art deep learning time series classifiers are vulnerable to adversarial attacks, which can have major consequences in multiple domains such as food safety and quality assurance.
    Comment: Accepted at IJCNN 2019
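    One widely used attack mechanism of the kind the abstract refers to is the Fast Gradient Sign Method (FGSM). The sketch below applies it to a toy logistic classifier over a raw time series, purely to illustrate the mechanics (a perturbation of magnitude eps in the direction of the loss gradient's sign); the paper's targets are deep networks, and the model here is an assumption for the example.

    ```python
    import numpy as np

    def fgsm_perturb(x, w, b, y, eps):
        """FGSM on a logistic classifier p = sigmoid(w.x + b).

        Adds eps * sign(grad_x loss) to the series x, i.e. a small step that
        increases the cross-entropy loss for the true label y in {0, 1}.
        """
        p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))
        grad_x = (p - y) * w  # gradient of the cross-entropy loss w.r.t. the input
        return x + eps * np.sign(grad_x)
    ```

    Even with small eps, repeated over every time step, such a perturbation can noticeably lower the classifier's confidence in the correct class.
    
    
    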

    Deep learning for time series classification: a review

    Time Series Classification (TSC) is an important and challenging problem in data mining. With the increasing availability of time series data, hundreds of TSC algorithms have been proposed. Among these methods, only a few have considered Deep Neural Networks (DNNs) to perform this task. This is surprising, as deep learning has seen very successful applications in recent years. DNNs have indeed revolutionized the field of computer vision, especially with the advent of novel deeper architectures such as Residual and Convolutional Neural Networks. Apart from images, sequential data such as text and audio can also be processed with DNNs to reach state-of-the-art performance for document classification and speech recognition. In this article, we study the current state-of-the-art performance of deep learning algorithms for TSC by presenting an empirical study of the most recent DNN architectures for TSC. We give an overview of the most successful deep learning applications in various time series domains under a unified taxonomy of DNNs for TSC. We also provide an open source deep learning framework to the TSC community, in which we implemented each of the compared approaches and evaluated them on a univariate TSC benchmark (the UCR/UEA archive) and 12 multivariate time series datasets. By training 8,730 deep learning models on 97 time series datasets, we propose the most exhaustive study of DNNs for TSC to date.
    Comment: Accepted at Data Mining and Knowledge Discovery

    The TopModL Initiative

    We believe that there is a very strong need for an environment to support research and experiments on model-driven engineering. Therefore we have started the TopModL project, an open-source initiative, with the goal of building a development community to provide: (1) an executable environment for quick and easy experimentation, (2) a set of source files and a compilation tool chain, (3) a web portal to share artefacts developed by the community. The aim of TopModL is to help the model-engineering research community by providing the quickest path between a research idea and a running prototype. In addition, we also want to identify all the possible contributions, and understand how to make it easy to integrate existing components while maintaining architectural integrity. At the time of writing we have almost completed the bootstrap phase (known as Blackhole), which means that we can model TopModL and generate TopModL with TopModL. Beyond this first phase, it is now of paramount importance to gather the best possible description of the requirements of the community involved in model-driven engineering to further develop TopModL, and also to make sure that we are able to reuse or federate existing efforts or goodwill. This paper is intended more to set up a basis for a constructive discussion than to offer definitive answers and closed solutions.

    Modeling Modeling Modeling

    Model-driven engineering and model-based approaches have permeated all branches of software engineering, to the point that it seems that we are using models, as Molière's Monsieur Jourdain was using prose, without knowing it. At the heart of modeling, there is a relation that we establish to represent something by something else. In this paper we review various definitions of models and relations between them. Then, we define a canonical set of relations that can be used to express various kinds of representation relations, and we propose a graphical concrete syntax to represent these relations. We also define a structural definition for this language in the form of a metamodel and a formal interpretation using Prolog. Hence, this paper is a contribution towards a theory of modeling.

    Metamodel-Aware Textual Concrete Syntax Specification

    Metamodeling is attracting more and more interest in the field of language engineering. While this approach is now well understood for the definition of abstract syntaxes, the formal definition of concrete syntaxes is still a challenge. Concrete syntaxes are traditionally expressed with rules conforming to EBNF-like grammars, which can be processed by compiler compilers to generate parsers. Unfortunately, these generated parsers produce concrete syntax trees, leaving a gap with the abstract syntax defined by metamodels. This gap is usually filled by time-consuming, ad hoc hand-coding. In this paper we propose a new kind of specification for concrete syntaxes that takes advantage of metamodels to generate tools (such as parsers or text generators) which directly manipulate abstract syntax trees. The principle is to map abstract syntaxes to concrete syntaxes via EBNF-like rules that explain how to render an abstract concept into a given concrete syntax, and how to trigger other rules to handle the properties of the concepts. The major difference with EBNF is that rules may have sub-rules, which can be automatically triggered based on the inheritance hierarchy of the abstract syntax concepts.
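    The rule-triggering idea can be sketched in a few lines: each abstract syntax concept gets a rendering rule, and the rule for a node is looked up along the concept's inheritance hierarchy, so a sub-concept without its own rule falls back to its super-concept's rule. All class and rule names below are invented for the illustration; this is not the paper's specification language.

    ```python
    # Hypothetical abstract syntax: a tiny expression metamodel.
    class Node: pass
    class Expr(Node): pass
    class Lit(Expr):
        def __init__(self, v): self.v = v
    class Add(Expr):
        def __init__(self, l, r): self.l, self.r = l, r

    # Concrete syntax rules, keyed by abstract concept; each rule may
    # trigger further rules (via `render`) for the concept's properties.
    RULES = {
        Lit: lambda n, render: str(n.v),
        Add: lambda n, render: f"({render(n.l)} + {render(n.r)})",
    }

    def render(node):
        """Render an abstract syntax tree to text, walking the inheritance
        hierarchy (MRO) to find the closest applicable rule."""
        for cls in type(node).__mro__:
            if cls in RULES:
                return RULES[cls](node, render)
        raise ValueError(f"no rule for {type(node).__name__}")
    ```

    A subclass of `Add` with no rule of its own would automatically be rendered by the `Add` rule, which is the sub-rule triggering the abstract describes.
    
    
    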

    Prospective randomized controlled trial of simulator-based versus traditional in-surgery laparoscopic camera navigation training

    Background: Surgical residents often use a laparoscopic camera in minimally invasive surgery for the first time in the operating room (OR), with no previous education or experience. Computer-based simulator training is increasingly used in residency programs. However, no randomized controlled study has compared the effect of simulator-based versus traditional OR-based training of camera navigation skills. Methods: This prospective randomized controlled study included 24 pre-graduation medical students without any experience in camera navigation or simulators. After a baseline camera navigation test in the OR, participants were randomized to six structured simulator-based training sessions in the skills lab (SL group) or to traditional training in the OR, navigating the camera during six laparoscopic interventions (OR group). After training, the camera test was repeated. Videos of all tests (including those of 14 experts) were rated by five blinded, independent experts according to a structured protocol. Results: The groups were well randomized and comparable. Both training groups significantly improved their camera navigation skills with regard to time to completion of the camera test (SL P=0.049; OR P=0.02) and correct organ visualization (P=0.04; P=0.03). Horizon alignment improved without reaching statistical significance (P=0.20; P=0.09). Although both groups spent an equal amount of actual time on camera navigation training (217 vs. 272 min, P=0.20), the SL group spent significantly less overall time in the skills lab than the OR group spent in the operating room (302 vs. 1,002 min, P<0.01). Conclusion: This is the first prospective randomized controlled study indicating that simulator-based training of camera navigation can be transferred to the OR, using traditional hands-on training as the control. In addition, simulator camera navigation training for laparoscopic surgery is as effective as, but more time efficient than, traditional teaching.

    VETESS : IDM, Test et SysML

    Selected paper from the 7th NEPTUNE Workshop. It often appears that systems engineering processes are in fact decomposed into discontinuous phases in which too little information is shared between the different teams, for example between the design and test teams. This weakness can be mitigated by the use of specification models, which then play the role of a shared reference for all the teams involved in the software life cycle. This type of model is commonly used as a basis for design, verification, and testing activities. Model-based testing is an original approach in which test cases and executable test scripts are automatically generated from a specification of the system under test. This specification takes the form of a behavioral model, allowing the test generator to determine, on the one hand, which execution contexts are relevant and, on the other hand, to predict the effects of these executions on the system. The goal of the VETESS project is to make this approach applicable to the validation of automotive embedded systems. It consists in implementing and tooling an automatic process to derive test cases from a specification model described with a subset of the SysML modeling language, and then to produce the corresponding test scripts to be executed on automotive test benches.

    Web site audience segmentation using hybrid alignment techniques

    We are working on behavioral marketing on the Internet. On the one hand we observe the behavior of visitors; on the other hand we trigger (in real time) stimulations intended to alter this behavior. Real-time operation and mass customization are the two challenges that we have to address. In this paper, we present a hybrid approach for clustering visitor sessions, based on a combination of global and local sequence alignments, such as Needleman-Wunsch and Smith-Waterman. Our goal is to define very simple approaches able to address about 80% of the visitor sessions to be segmented, and which can easily be turned into small pieces of program to be run in parallel in thousands of web browsers.
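    The global-alignment half of the hybrid can be sketched with the standard Needleman-Wunsch dynamic program, scoring two visitor sessions represented as sequences of page identifiers. The scoring values and the session encoding are assumptions for the illustration, not the paper's parameters.

    ```python
    def needleman_wunsch(s1, s2, match=1, mismatch=-1, gap=-1):
        """Global alignment score between two sessions (sequences of page ids)."""
        n, m = len(s1), len(s2)
        F = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            F[i][0] = i * gap          # aligning a prefix of s1 against nothing
        for j in range(1, m + 1):
            F[0][j] = j * gap          # aligning a prefix of s2 against nothing
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                s = match if s1[i - 1] == s2[j - 1] else mismatch
                F[i][j] = max(F[i - 1][j - 1] + s,   # (mis)match
                              F[i - 1][j] + gap,     # gap in s2
                              F[i][j - 1] + gap)     # gap in s1
        return F[n][m]
    ```

    Pairwise scores like this one (and their local Smith-Waterman counterpart) can then feed any standard clustering step to segment the sessions; the quadratic table is small enough to evaluate in a browser for typical session lengths.
    
    
    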

    Big metamodels are evil

    While reuse is typically considered a good practice, it may also lead to keeping irrelevant concerns in derived elements. For instance, new metamodels are usually built upon existing metamodels using additive techniques such as profiling and package merge. With such additive techniques, new metamodels tend to become bigger and bigger, which leads to harmful overheads of complexity for both tool builders and users. In this paper, we introduce « package unmerge » - a proposal for a subtractive relation between packages - which complements existing metamodel-extension techniques.
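    The subtractive idea can be illustrated by modeling a package as a mapping from class names to feature sets: merge adds an extension's contributions, and unmerge removes them again. This toy encoding and both function names are assumptions for the sketch; the paper defines unmerge at the metamodel level, not over Python dicts.

    ```python
    # A "package" here is {class name: set of feature names} - a deliberate simplification.

    def package_merge(base, extension):
        """Additive composition: union the extension's classes and features into base."""
        merged = {c: set(f) for c, f in base.items()}
        for c, feats in extension.items():
            merged.setdefault(c, set()).update(feats)
        return merged

    def package_unmerge(big, unwanted):
        """Subtractive composition: drop the classes/features contributed by `unwanted`."""
        result = {}
        for c, feats in big.items():
            kept = feats - unwanted.get(c, set())
            if kept or c not in unwanted:
                result[c] = kept  # keep the class unless unmerge emptied it entirely
        return result
    ```

    For disjoint contributions, unmerge undoes merge, which is the sense in which it is the subtractive dual of package merge.
    
    
    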